Semi-supervised Learning Based on Joint Diffusion of Graph Functions and Laplacians
Abstract
In existing anisotropic-diffusion-based semi-supervised learning approaches, the anisotropic graph Laplacian is estimated from (potentially noisy) function evaluations. We propose to regularize these graph Laplacian estimates. We develop a framework that regularizes Laplace-Beltrami operators on Riemannian manifolds, and discretize it into a regularizer on diffusivity operators on graphs.

The isotropic Laplace-Beltrami operator $\Delta$ on a Riemannian manifold $(M, g)$ with metric $g$ is a second-order differential operator, $\Delta f = \nabla_g^* \nabla_g f$, where $\nabla_g$ and $\nabla_g^*$ are the gradient and divergence operators, respectively. $\Delta$ generates the diffusion process on $M$: $\frac{\partial f}{\partial t} = -\Delta f$. The anisotropic Laplace-Beltrami operator $\Delta_D$ is defined via a symmetric positive definite diffusivity operator $D$: $\Delta_D f = \nabla_g^* D \nabla_g f$. Here $D$ controls the strength and direction of diffusion at each point $x$ on $M$. We regularize $\Delta_D$ by regularizing $D$ as a surrogate:

1) Kernel-based representation of $\Delta$ [HAL05]: a consistent kernel-based estimate $\Delta_h^g f$ is given by
$$[\Delta_h^g f](x) = \frac{1}{h^2}\left( f(x) - \frac{[A_h^g(x) f]}{d_h(x)} \right),$$
where $[A_h^g(x) f] = \int_M k_h(x, y) f(y)\, dV(y)$, $d_h(x) = [A_h^g(x)\mathbf{1}]$, $dV(x) = \sqrt{|\det(\mathbf{g})|}\, dx$ ($\mathbf{g}$: the coordinate matrix of $g$), and
$$k_h(x, y) = \begin{cases} \frac{1}{h^m}\, k\!\left(\|i(x) - i(y)\|_{\mathbb{R}^m}^2,\; h\right) & \text{if } \|i(x) - i(y)\|_{\mathbb{R}^m} \le h, \\ 0 & \text{otherwise}, \end{cases}$$
with $k(a, b) = \exp(-a/b)$ and $i$ the embedding of $M$ into $\mathbb{R}^m$. ⇒ The spatial variation of $\Delta$ is entirely determined by the metric $g$.

2) Equivalence of metric and diffusivity operator on manifolds:

Proposition 1 [KTP15]. The anisotropic Laplacian $\Delta_D$ on a compact Riemannian manifold $(M, g)$ is equivalent to the Laplace-Beltrami operator $\widetilde{\Delta}$ on $(M, \widetilde{g})$ with a new metric $\widetilde{g}$ depending on $D$. When the diffusivity operator $D$ is uniformly positive definite, $\widetilde{g}$ is explicitly obtained as $c(x)\,\widetilde{\mathbf{g}}(x) = \mathbf{g}(x)\,\mathbf{D}^{-1}(x)$, where $\mathbf{g}(x)$ and $\mathbf{D}(x)$ are the coordinate matrices of $g$ and $D$ at each point $x$, and $c(x) = \sqrt{\det \widetilde{\mathbf{g}}(x)} / \sqrt{\det \mathbf{g}(x)}$. ⇒ Anisotropic diffusion on $(M, g)$ is isotropic diffusion on $M$ with the new metric $\widetilde{g}$.
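As a minimal sketch of the kernel-based estimate above on a finite point sample (integrals replaced by sums; the function name, the bandwidth value, and dropping the $1/h^m$ factor, which cancels in the ratio $[A_h f]/d_h$, are our illustrative assumptions, not the paper's code):

```python
import numpy as np

def kernel_laplacian(points, f, h):
    """Discrete sketch of the kernel-based Laplace-Beltrami estimate
    [Delta_h f](x) = (1/h^2) * (f(x) - [A_h f](x) / d_h(x)),
    with the truncated kernel k_h(x, y) = exp(-|x - y|^2 / h)
    supported on |x - y| <= h.  The 1/h^m normalization is omitted
    because it cancels in the ratio [A_h f] / d_h."""
    diff = points[:, None, :] - points[None, :, :]   # pairwise differences
    sq = np.sum(diff**2, axis=-1)                    # squared distances
    K = np.where(np.sqrt(sq) <= h, np.exp(-sq / h), 0.0)
    d = K.sum(axis=1)                                # d_h(x) = [A_h 1](x)
    return (f - (K @ f) / d) / h**2

# sanity check: the estimate annihilates constant functions exactly
pts = np.linspace(0.0, 1.0, 50)[:, None]
out = kernel_laplacian(pts, np.ones(50), 0.1)
print(np.allclose(out, 0.0))  # True
```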
Discretization: on a weighted graph $(X, E, W)$ with nodes $X = \{x_1, \ldots, x_u\}$, edges $E = \{e_{ij}\} \subset X \times X$, non-negative similarities $w_{ij} := w(e_{ij}) \in W$, and the space $\mathcal{H}(E_i)$ of functions on the set $E_i$ of edges incident to $x_i$, the local graph diffusivity operator $D_i : \mathcal{H}(E_i) \to \mathcal{H}(E_i)$ is defined as
$$D_i := \sum_{j:(j,i) \in E_i} q_{ij}\, b_{ij} \otimes b_{ij} \quad\Longleftrightarrow\quad [D_i S](e_{ij}) = q_{ij}\, b_{ij}\, \langle b_{ij}, S \rangle, \quad \forall S \in \mathcal{H}(E_i),$$
where $\otimes$ denotes the tensor product and $b_{ij} := \mathbf{1}_{ij} \in \mathcal{H}(E)$ are basis functions. The anisotropic graph Laplacian is then defined as
$$[Lf](x_i) := [\nabla_i^* D_i \nabla_i f](x_i) = \frac{1}{d_i} \sum_{j=1}^{u} w_{ij}\, q_{ij} \left( f(x_i) - f(x_j) \right),$$
where $d_i$ is the degree of node $x_i$.
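The discrete anisotropic Laplacian above admits a direct matrix implementation. The following NumPy sketch is illustrative (the function name and the dense-matrix representation of $W$ and the diffusivities $q_{ij}$ are our assumptions, not the paper's code):

```python
import numpy as np

def anisotropic_laplacian(W, Q, f):
    """Apply the anisotropic graph Laplacian
    [Lf](x_i) = (1/d_i) * sum_j w_ij q_ij (f(x_i) - f(x_j)),
    where W holds the edge similarities w_ij, Q the per-edge
    diffusivities q_ij, and d_i = sum_j w_ij is the node degree."""
    WQ = W * Q                        # elementwise w_ij * q_ij
    d = W.sum(axis=1)                 # node degrees d_i
    return (WQ.sum(axis=1) * f - WQ @ f) / d

# With q_ij = 1 this reduces to the usual random-walk Laplacian,
# which annihilates constant functions:
W = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
Q = np.ones_like(W)
print(np.allclose(anisotropic_laplacian(W, Q, np.ones(3)), 0.0))  # True
```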
Similar Papers
A Combinatorial View of Graph Laplacians
Discussions about different graph Laplacians (mainly the normalized and unnormalized versions of the graph Laplacian) have been ardent with respect to various methods of clustering and graph-based semi-supervised learning. Previous research on graph Laplacians, from a continuous perspective, investigated the convergence properties of the Laplacian operators on Riemannian manifolds. In this paper,...
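To make the contrast concrete, here is a short sketch of the two Laplacians under discussion (these are the standard definitions, not code from the paper):

```python
import numpy as np

def graph_laplacians(W):
    """The two Laplacians the discussion contrasts: the unnormalized
    L = D - W and the symmetric normalized L_sym = I - D^{-1/2} W D^{-1/2},
    where D = diag(d) holds the node degrees d_i = sum_j w_ij."""
    d = W.sum(axis=1)
    L = np.diag(d) - W
    Dinv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - Dinv_sqrt @ W @ Dinv_sqrt
    return L, L_sym

# both Laplacians of a two-node graph; L annihilates constants
W = np.array([[0., 1.], [1., 0.]])
L, L_sym = graph_laplacians(W)
print(np.allclose(L @ np.ones(2), 0.0))  # True
```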
Combining Graph Laplacians for Semi-Supervised Learning
A foundational problem in semi-supervised learning is the construction of a graph underlying the data. We propose to use a method which optimally combines a number of differently constructed graphs. For each of these graphs we associate a basic graph kernel. We then compute an optimal combined kernel. This kernel solves an extended regularization problem which requires a joint minimization over...
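A hedged sketch of the basic ingredient, a convex combination of graph kernels (the paper optimizes the combination weights jointly with a regularization objective; here the weights are fixed for illustration, and all names are ours):

```python
import numpy as np

def combine_kernels(kernels, mu):
    """Convex combination K = sum_k mu_k K_k of basic graph kernels.
    A convex combination of positive semidefinite matrices is itself
    positive semidefinite, so K is again a valid kernel."""
    mu = np.asarray(mu, dtype=float)
    assert np.all(mu >= 0) and np.isclose(mu.sum(), 1.0)
    return sum(m * K for m, K in zip(mu, kernels))

# two toy PSD kernels and their equal-weight combination
K1 = np.array([[2., 1.], [1., 2.]])
K2 = np.eye(2)
K = combine_kernels([K1, K2], [0.5, 0.5])
print(np.min(np.linalg.eigvalsh(K)) >= 0)  # True
```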
A Combinatorial View of the Graph Laplacians
Discussions about different graph Laplacians, mainly the normalized and unnormalized versions, have been ardent with respect to various methods in clustering and graph-based semi-supervised learning. Previous research on graph Laplacians investigated their convergence properties to Laplacian operators on continuous manifolds. There is still no strong proof of convergence...
Manifold Denoising
We consider the problem of denoising a noisily sampled submanifold $M$ in $\mathbb{R}^d$, where the submanifold $M$ is a priori unknown and we are only given a noisy point sample. The presented denoising algorithm is based on a graph-based diffusion process on the point sample. We analyze this diffusion process using recent results about the convergence of graph Laplacians. In the experiments we show that our m...
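One step of such a graph-based diffusion of the point sample might look as follows (the kernel choice, bandwidth, and step size are our illustrative assumptions, not the paper's exact update rule):

```python
import numpy as np

def denoise_step(X, h, dt):
    """One graph-diffusion denoising step: move each point a small
    amount along -L X, where L = I - D^{-1} W is a random-walk graph
    Laplacian built from a truncated Gaussian kernel on the current
    point positions."""
    sq = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    W = np.where(sq <= h**2, np.exp(-sq / h**2), 0.0)
    d = W.sum(axis=1)                  # node degrees (>= 1: self-weight)
    LX = X - (W @ X) / d[:, None]      # (I - D^{-1} W) X
    return X - dt * LX

# points on a line with small vertical noise contract toward
# their local weighted means, shrinking the noise component
rng = np.random.default_rng(0)
X = np.c_[np.linspace(0.0, 1.0, 30), 0.01 * rng.standard_normal(30)]
Y = denoise_step(X, 0.2, 0.5)
print(np.std(Y[:, 1]) < np.std(X[:, 1]))  # True
```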
Nonparametric Transforms of Graph Kernels for Semi-Supervised Learning
We present an algorithm based on convex optimization for constructing kernels for semi-supervised learning. The kernel matrices are derived from the spectral decomposition of graph Laplacians, and combine labeled and unlabeled data in a systematic fashion. Unlike previous work using diffusion kernels and Gaussian random field kernels, a nonparametric kernel approach is presented that incorporat...
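A sketch of the underlying construction, building a kernel by transforming the Laplacian's spectrum (the transform $r$ below is a fixed illustrative choice; the paper learns the transform nonparametrically via convex optimization):

```python
import numpy as np

def spectral_kernel(L, r):
    """Build a kernel from the spectral decomposition of a graph
    Laplacian: K = sum_i r(lambda_i) * phi_i phi_i^T.  Any non-negative
    transform r yields a positive semidefinite kernel; for
    semi-supervised learning, r is typically decreasing in lambda."""
    lam, phi = np.linalg.eigh(L)       # eigenvalues (ascending), eigenvectors
    return (phi * r(lam)) @ phi.T      # scale columns by r(lambda_i)

# path-graph Laplacian with a regularized-inverse transform
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
K = spectral_kernel(L, lambda lam: 1.0 / (lam + 0.1))
print(np.min(np.linalg.eigvalsh(K)) > 0)  # True
```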
Large Graph Construction for Scalable Semi-Supervised Learning
In this paper, we address the scalability issue plaguing graph-based semi-supervised learning via a small number of anchor points which adequately cover the entire point cloud. Critically, these anchor points enable nonparametric regression that predicts the label for each data point as a locally weighted average of the labels on anchor points. Because conventional graph construction is ineffic...
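The anchor-based prediction step can be sketched as a locally weighted average (the Gaussian weights here are an illustrative stand-in for the paper's constrained local reconstruction, and all names are ours):

```python
import numpy as np

def anchor_predict(X, anchors, anchor_labels, h=1.0):
    """Predict each point's label as a locally weighted average of the
    labels on anchor points: Z_ik proportional to exp(-|x_i - u_k|^2 / h),
    with the rows of Z normalized to sum to 1 (nonparametric regression
    onto the anchor set)."""
    sq = np.sum((X[:, None, :] - anchors[None, :, :])**2, axis=-1)
    Z = np.exp(-sq / h)
    Z /= Z.sum(axis=1, keepdims=True)   # row-stochastic weights
    return Z @ anchor_labels

# two anchors with labels 0 and 1: points near an anchor inherit its label
anchors = np.array([[0.0], [10.0]])
labels = np.array([0.0, 1.0])
pred = anchor_predict(np.array([[0.1], [9.9]]), anchors, labels)
print(pred[0] < 0.5 < pred[1])  # True
```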